Search for: All records

Creators/Authors contains: "J"

Note: When clicking on a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. SLOWPOKE is a new system to accurately quantify the effects of hypothetical optimizations on end-to-end throughput for microservice applications, without relying on tracing or a priori knowledge of the call graph. Microservice operators can use SLOWPOKE to ask what-if performance analysis questions of the form "What throughput could my retail application sustain if I optimized the shopping cart service from 10K req/s to 20K req/s?". Given a target service and its hypothetical optimization, SLOWPOKE employs a performance model that determines how to selectively slow down non-target services to preserve the relative effect of the optimization. It then performs profiling experiments to predict the end-to-end throughput, as if the optimization had been implemented. Applied to four real-world microservice applications, SLOWPOKE accurately quantifies optimization effects with a root mean squared error of only 2.07%. It is also effective in more complex scenarios, e.g., predicting throughput after scaling optimizations or when bottlenecks arise from mutex contention. Evaluated in large-scale deployments of 45 nodes and 108 synthetic benchmarks, SLOWPOKE further demonstrates its scalability and coverage of a wide range of microservice characteristics. (A toy sketch of this relative-slowdown idea appears after this list.)
    Free, publicly-accessible full text available May 4, 2027
  2. High resolution electron backscatter diffraction (HR-EBSD) is a technique used to map elastic strain, crystallographic orientation and dislocation density in a scanning electron microscope. This review covers the background and mathematics of this technique, contextualizing it within the broader landscape of EBSD techniques and other materials characterization methods. Several case studies are presented showing the application of HR-EBSD to the study of plasticity in metals, failure analysis in microelectronics and defect quantification in thin films. This is intended to be a comprehensive resource for researchers developing this technique as well as an introduction to those wishing to apply it. 
    Free, publicly-accessible full text available March 17, 2027
  3. Free, publicly-accessible full text available February 1, 2027
  4. Generative recommendation (GR) is an emerging paradigm that tokenizes items into discrete tokens and learns to autoregressively generate the next tokens as predictions. While this token-generation paradigm is expected to surpass traditional transductive methods, potentially generating new items directly based on semantics, we empirically show that GR models predominantly generate items seen during training and struggle to recommend unseen items. In this paper, we propose SpecGR, a plug-and-play framework that enables GR models to recommend new items in an inductive setting. SpecGR uses a drafter model with inductive capability to propose candidate items, which may include both existing items and new items. The GR model then acts as a verifier, accepting or rejecting candidates while retaining its strong ranking capabilities. We further introduce the guided re-drafting technique to make the proposed candidates more aligned with the outputs of generative recommendation models, improving verification efficiency. We consider two variants for drafting: (1) using an auxiliary drafter model for better flexibility, or (2) leveraging the GR model's own encoder for parameter-efficient self-drafting. Extensive experiments on three real-world datasets demonstrate that SpecGR exhibits both strong inductive recommendation ability and the best overall performance among the compared methods. (A toy sketch of the draft-then-verify loop appears after this list.)
    Free, publicly-accessible full text available January 20, 2027
  5. Jenkins, C.; Taylor, M. (Eds.)
    Generative recommendation (GR) is an emerging paradigm that tokenizes items into discrete tokens and learns to autoregressively generate the next tokens as predictions. While this token-generation paradigm is expected to surpass traditional transductive methods, potentially generating new items directly based on semantics, we empirically show that GR models predominantly generate items seen during training and struggle to recommend unseen items. In this paper, we propose SpecGR, a plug-and-play framework that enables GR models to recommend new items in an inductive setting. SpecGR uses a drafter model with inductive capability to propose candidate items, which may include both existing items and new items. The GR model then acts as a verifier, accepting or rejecting candidates while retaining its strong ranking capabilities. We further introduce the guided re-drafting technique to make the proposed candidates more aligned with the outputs of generative recommendation models, improving verification efficiency. We consider two variants for drafting: (1) using an auxiliary drafter model for better flexibility, or (2) leveraging the GR model's own encoder for parameter-efficient self-drafting. Extensive experiments on three real-world datasets demonstrate that SpecGR exhibits both strong inductive recommendation ability and the best overall performance among the compared methods.
    Free, publicly-accessible full text available January 20, 2027
  6. Free, publicly-accessible full text available February 1, 2027
  7. Free, publicly-accessible full text available January 8, 2027
  8. Free, publicly-accessible full text available February 1, 2027
  9. Free, publicly-accessible full text available December 11, 2026
  10. Free, publicly-accessible full text available February 1, 2027
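
A minimal sketch of the relative-slowdown idea described in item 1's abstract, in Python. The function names (predicted_throughput, run_profiling_experiment, fake_profile) and the specific scaling rule are illustrative assumptions for this listing, not SLOWPOKE's actual performance model.

    # Toy illustration of the relative-slowdown idea from the SLOWPOKE abstract.
    # All names and the scaling rule below are assumptions, not taken from the paper.

    def predicted_throughput(current_target_rps: float,
                             hypothetical_target_rps: float,
                             run_profiling_experiment) -> float:
        """Estimate end-to-end throughput as if the target service were optimized.

        Instead of making the target faster, every *other* service is slowed down
        by the same factor, the system is profiled as-is, and the measured
        throughput is scaled back up. `run_profiling_experiment` is a hypothetical
        callable that applies the given slowdown to non-target services and returns
        the measured end-to-end throughput in req/s.
        """
        # Factor by which the target is hypothetically sped up (e.g. 10K -> 20K => 2.0).
        speedup = hypothetical_target_rps / current_target_rps

        # Preserve the relative effect: slow all non-target services by `speedup`.
        measured = run_profiling_experiment(non_target_slowdown=speedup)

        # Undo the artificial slowdown to obtain the predicted throughput.
        return measured * speedup

    if __name__ == "__main__":
        # Stand-in for a real profiling run: pretend end-to-end throughput is
        # limited by a fixed downstream capacity shared by the slowed services.
        def fake_profile(non_target_slowdown: float) -> float:
            downstream_capacity_rps = 8000.0
            return downstream_capacity_rps / non_target_slowdown

        print(predicted_throughput(10_000, 20_000, fake_profile))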
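
A minimal sketch of the draft-then-verify loop described in the SpecGR abstracts (items 4 and 5), also in Python. The drafter and verifier callables, the acceptance threshold, and the re-drafting placeholder are illustrative assumptions, not the paper's algorithm.

    # Toy sketch of a draft-then-verify recommendation loop in the spirit of SpecGR.
    # Scoring, threshold, and re-drafting details are assumptions made for illustration.

    from typing import Callable, List

    def spec_gr_recommend(draft: Callable[[int], List[str]],
                          verify: Callable[[str], float],
                          k: int,
                          accept_threshold: float,
                          max_rounds: int = 3) -> List[str]:
        """Return up to k items: a drafter proposes candidates, a verifier scores them.

        `draft(n)` is a hypothetical inductive drafter returning n candidate item ids
        (possibly unseen during GR training); `verify(item)` is a hypothetical score
        from the generative recommender used to accept or reject drafted candidates.
        """
        accepted: List[str] = []
        seen = set()
        for _ in range(max_rounds):
            for item in draft(2 * k):          # over-draft, then filter
                if item in seen:
                    continue
                seen.add(item)
                if verify(item) >= accept_threshold:
                    accepted.append(item)
            if len(accepted) >= k:
                break
            # "Guided re-drafting" (loosely): a real system would condition the
            # drafter on the verifier's feedback before the next round.
        # Rank accepted candidates by the verifier's score and truncate to k.
        return sorted(accepted, key=verify, reverse=True)[:k]

    if __name__ == "__main__":
        catalog = {"item_a": 0.9, "item_b": 0.4, "item_new": 0.8}
        print(spec_gr_recommend(lambda n: list(catalog)[:n],
                                lambda i: catalog[i],
                                k=2, accept_threshold=0.5))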